Algorithm Portfolio


Landscape Features in Single-Objective Continuous Optimization: Have We Hit a Wall in Algorithm Selection Generalization?

Cenikj, Gjorgjina, Petelin, Gašper, Seiler, Moritz, Cenikj, Nikola, Eftimov, Tome

arXiv.org Artificial Intelligence

Motivated by the potential to capitalize on the varied performance of different algorithms across sets of different problem instances, the algorithm selection (AS) task targets the automated identification of a preferred optimization algorithm to solve a particular problem instance [Kotthoff, 2016, Kerschke et al., 2019]. Conventionally, AS is performed by taking into account the properties of the problem instance, which are typically described in the form of a numerical vector representation, also referred to as problem landscape features. Once a problem instance is represented in a vector form, Machine Learning (ML) models can be used to capture the relation between problem landscape features and algorithm performance, and further identify the best algorithm for a problem instance. In the field of single-objective continuous optimization, the most common choice of problem landscape features used to represent problem instances are the Exploratory Landscape Analysis (ELA) [Mersmann et al., 2011] features.
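The feature-based selection pipeline described above can be sketched in a few lines: problem instances are mapped to numeric feature vectors (here two invented stand-ins for ELA features), and a trivial nearest-neighbour model recommends the algorithm that performed best on the most similar training instance. The feature values and algorithm labels are illustrative, not taken from any of the papers listed here.

```python
import math

# Toy "landscape features" for training instances (stand-ins for ELA
# features), each paired with the algorithm that performed best on it.
train = [
    ((0.1, 0.9), "CMA-ES"),
    ((0.2, 0.8), "CMA-ES"),
    ((0.9, 0.1), "BFGS"),
    ((0.8, 0.2), "BFGS"),
]

def select_algorithm(features):
    """1-nearest-neighbour selector: recommend the algorithm that was
    best on the training instance with the closest feature vector."""
    nearest = min(train, key=lambda pair: math.dist(pair[0], features))
    return nearest[1]
```

In practice the nearest-neighbour model would be replaced by a stronger ML model (e.g. a random forest) trained on real ELA features, but the interface is the same: feature vector in, algorithm recommendation out.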


An Item Response Theory-based R Module for Algorithm Portfolio Analysis

Oldfield, Brodie, Kandanaarachchi, Sevvandi, Xu, Ziqi, Muñoz, Mario Andrés

arXiv.org Artificial Intelligence

Experimental evaluation is crucial in AI research, especially for assessing algorithms across diverse tasks. Many studies evaluate only a limited set of algorithms, failing to fully understand their strengths and weaknesses within a comprehensive portfolio. This paper introduces an Item Response Theory (IRT) based analysis tool for algorithm portfolio evaluation called AIRT-Module. Traditionally used in educational psychometrics, IRT models test-question difficulty and student ability using responses to test questions. Adapting IRT to algorithm evaluation, the AIRT-Module comprises a Shiny web application and the R package airt. AIRT-Module uses algorithm performance measures to compute anomalousness, consistency, and difficulty limits for an algorithm, as well as the difficulty of test instances. The strengths and weaknesses of algorithms are visualised using the difficulty spectrum of the test instances. AIRT-Module offers a detailed understanding of algorithm capabilities across varied test instances, thus enhancing comprehensive AI method assessment. It is available at https://sevvandi.shinyapps.io/AIRT/.
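At the core of IRT sits a logistic model linking ability and difficulty. A minimal sketch of the classical two-parameter logistic (2PL) model, before AIRT-Module's reinterpretation of "students" as algorithms and "questions" as test instances, looks like this (parameter values in the comments are illustrative):

```python
import math

def irt_2pl(theta, difficulty, discrimination=1.0):
    """Two-parameter logistic IRT model: probability that a student of
    ability `theta` answers an item of the given difficulty correctly.
    Higher discrimination sharpens the transition around
    theta == difficulty."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# When ability exactly matches difficulty, the success probability is 0.5;
# it rises towards 1 as ability exceeds difficulty.
```

Fitting the parameters to observed algorithm performance data is where packages such as airt come in; this sketch only shows the functional form.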


On Constructing Algorithm Portfolios in Algorithm Selection for Computationally Expensive Black-box Optimization in the Fixed-budget Setting

Yoshikawa, Takushi, Tanabe, Ryoji

arXiv.org Artificial Intelligence

Feature-based offline algorithm selection has shown its effectiveness in a wide range of optimization problems, including black-box optimization. An algorithm selection system selects the most promising optimizer from an algorithm portfolio, a set of pre-defined optimizers. Thus, algorithm selection requires a well-constructed algorithm portfolio consisting of efficient optimizers that complement each other. Although construction methods for the fixed-target setting have been well studied, those for the fixed-budget setting have received less attention. The fixed-budget setting is generally used for computationally expensive optimization, where the budget of function evaluations is small. In this context, this paper first points out some undesirable properties of the experimental setups used in previous studies. It then argues for the importance of considering the number of function evaluations used in the sampling phase when constructing algorithm portfolios, a factor that previous studies ignored. The results show that algorithm portfolios constructed by our approach perform significantly better than those constructed by the previous approach.
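A common way to construct a complementary portfolio, and a reasonable stand-in for the kind of procedure discussed here, is greedy selection against the virtual best solver (VBS). The sketch below assumes a hypothetical table `perf[alg][inst]` of objective values reached within the fixed budget (lower is better); the data are made up.

```python
def build_portfolio(perf, k):
    """Greedy portfolio construction: at each step, add the algorithm
    that most improves the virtual best solver (VBS), i.e. the sum over
    instances of the best value any chosen algorithm achieves."""
    algs = list(perf)
    insts = range(len(next(iter(perf.values()))))
    portfolio = []
    for _ in range(k):
        def vbs_score(candidate):
            chosen = portfolio + [candidate]
            return sum(min(perf[a][i] for a in chosen) for i in insts)
        best = min((a for a in algs if a not in portfolio), key=vbs_score)
        portfolio.append(best)
    return portfolio
```

With complementary algorithms (each strong on different instances), the greedy step naturally picks a pair that covers both, rather than two copies of the same strength.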


PS-AAS: Portfolio Selection for Automated Algorithm Selection in Black-Box Optimization

Kostovska, Ana, Cenikj, Gjorgjina, Vermetten, Diederick, Jankovic, Anja, Nikolikj, Ana, Skvorc, Urban, Korosec, Peter, Doerr, Carola, Eftimov, Tome

arXiv.org Artificial Intelligence

The performance of automated algorithm selection (AAS) strongly depends on the portfolio of algorithms to choose from. Selecting the portfolio is a non-trivial task that requires balancing the trade-off between the higher flexibility of large portfolios and the increased complexity of the AAS task. In practice, probably the most common way to choose the algorithms for the portfolio is a greedy selection of the algorithms that perform well on some reference tasks of interest. We set out in this work to investigate alternative, data-driven portfolio selection techniques. Our proposed method creates algorithm behavior meta-representations, constructs a graph from a set of algorithms based on their meta-representation similarity, and applies a graph algorithm to select a final portfolio of diverse, representative, and non-redundant algorithms. We evaluate two distinct meta-representation techniques (SHAP and performance2vec) for selecting complementary portfolios from a total of 324 different variants of CMA-ES for the task of optimizing the BBOB single-objective problems in dimensionalities 5 and 30 with different cut-off budgets. We test two types of portfolios: one related to overall algorithm behavior and a `personalized' one (related to algorithm behavior on each problem separately). We observe that the approach built on the performance2vec-based representations favors small portfolios with negligible error in the AAS task relative to the virtual best solver from the selected portfolio, whereas the portfolios built from the SHAP-based representations gain from higher flexibility at the cost of decreased AAS performance. Across most considered scenarios, personalized portfolios yield comparable or slightly better performance than the classical greedy approach. They outperform the full portfolio in all scenarios.
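The graph-based idea can be illustrated with a minimal sketch: compute cosine similarity between algorithm meta-representations, treat near-duplicates as connected, and greedily keep one representative per cluster (a maximal independent set). The meta-representation vectors below are invented placeholders, not actual SHAP or performance2vec outputs.

```python
import math

def cosine(u, v):
    """Cosine similarity between two meta-representation vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def select_diverse_portfolio(meta, threshold=0.95):
    """Greedily keep an algorithm only if it is not a near-duplicate
    (similarity above `threshold`) of anything already kept, yielding
    a diverse, non-redundant portfolio."""
    kept = []
    for name in meta:
        if all(cosine(meta[name], meta[k]) <= threshold for k in kept):
            kept.append(name)
    return kept
```

Here two near-identical CMA-ES variants would collapse into one portfolio member, while a behaviorally distinct algorithm survives; the paper's actual graph algorithm is more sophisticated, but the redundancy-pruning intent is the same.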


Comprehensive Algorithm Portfolio Evaluation using Item Response Theory

Kandanaarachchi, Sevvandi, Smith-Miles, Kate

arXiv.org Artificial Intelligence

Item Response Theory (IRT) has been proposed within the field of Educational Psychometrics to assess student ability as well as test question difficulty and discrimination power. More recently, IRT has been applied to evaluate machine learning algorithm performance on a single classification dataset, where the student is now an algorithm, and the test question is an observation to be classified by the algorithm. In this paper we present a modified IRT-based framework for evaluating a portfolio of algorithms across a repository of datasets, while simultaneously eliciting a richer suite of characteristics - such as algorithm consistency and anomalousness - that describe important aspects of algorithm performance. These characteristics arise from a novel inversion and reinterpretation of the traditional IRT model without requiring additional dataset feature computations. We test this framework on algorithm portfolios for a wide range of applications, demonstrating the broad applicability of this method as an insightful algorithm evaluation tool. Furthermore, the explainable nature of IRT parameters yields an increased understanding of algorithm portfolios.


Explainable Model-specific Algorithm Selection for Multi-Label Classification

Kostovska, Ana, Doerr, Carola, Džeroski, Sašo, Kocev, Dragi, Panov, Panče, Eftimov, Tome

arXiv.org Artificial Intelligence

Multi-label classification (MLC) is an ML task of predictive modeling in which a data instance can simultaneously belong to multiple classes. MLC is increasingly gaining interest in different application domains such as text mining, computer vision, and bioinformatics. Several MLC algorithms have been proposed in the literature, resulting in a meta-optimization problem that the user needs to address: which MLC approach to select for a given dataset? To address this algorithm selection problem, we investigate in this work the quality of an automated approach that uses characteristics of the datasets - so-called features - and a trained algorithm selector to choose which algorithm to apply for a given task. For our empirical evaluation, we use a portfolio of 38 datasets. We consider eight MLC algorithms, whose quality we evaluate using six different performance metrics. We show that our automated algorithm selector outperforms any of the single MLC algorithms across all evaluated performance measures. Our selection approach is explainable, a characteristic that we exploit to investigate which meta-features have the largest influence on the decisions made by the algorithm selector. Finally, we also quantify the importance of the most significant meta-features for various domains.


Facebook Adds This New Framework to Its Reinforcement Learning Arsenal - KDnuggets

#artificialintelligence

Building deep reinforcement learning (DRL) systems remains incredibly challenging. DRL is a nascent discipline in the deep learning space, and the frameworks and tools for implementing DRL models remain basic. Furthermore, the core innovation in DRL is coming from the big corporate AI labs like DeepMind, Facebook or Google. Almost a year ago, Facebook open sourced Horizon, a framework focused on streamlining the implementation of DRL solutions. After a year of using Horizon and implementing large-scale DRL systems, Facebook open sourced ReAgent, a new framework that expands the original vision of Horizon to the implementation of end-to-end reasoning systems.


A Portfolio Approach to Algorithm Selection for Discrete Time-Cost Trade-off Problem

Mungle, Santosh

arXiv.org Artificial Intelligence

It is a known fact that the performance of optimization algorithms for NP-hard problems varies from instance to instance. We observed the same trend when we comprehensively studied multi-objective evolutionary algorithms (MOEAs) on six benchmark instances of the discrete time-cost trade-off problem (DTCTP) in a construction project. In this paper, instead of using a single algorithm to solve the DTCTP, we use a portfolio approach that takes multiple algorithms as its constituents. We propose a portfolio comprising four MOEAs: the Non-dominated Sorting Genetic Algorithm II (NSGA-II), the Strength Pareto Evolutionary Algorithm II (SPEA-II), the Pareto Archived Evolution Strategy (PAES), and the Niched Pareto Genetic Algorithm II (NPGA-II). The results show that the portfolio approach is computationally fast and qualitatively superior to its constituent algorithms for all benchmark instances. Moreover, the portfolio approach provides insight into selecting the best algorithm for each benchmark instance of the DTCTP.
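One simple way such a portfolio can be qualitatively superior to its constituents is by merging their Pareto fronts: the union of non-dominated (time, cost) solutions can only improve on any single MOEA's front. A minimal sketch, with invented front data standing in for the MOEAs' outputs:

```python
def non_dominated(points):
    """Keep the Pareto-optimal points for a minimisation problem with
    objectives such as (time, cost)."""
    def dominates(p, q):
        # p dominates q if p is no worse in every objective and differs.
        return all(a <= b for a, b in zip(p, q)) and p != q
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Merge the fronts returned by two constituent MOEAs into one portfolio front.
front_a = [(10, 5), (8, 7)]   # hypothetical NSGA-II output
front_b = [(9, 4), (12, 3)]   # hypothetical SPEA-II output
portfolio_front = non_dominated(front_a + front_b)
```

Note how a point from one constituent, (10, 5), is dominated by a point from the other, (9, 4), so the merged front is strictly better than either constituent's front alone.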


Using an Algorithm Portfolio to Solve Sokoban

Froleyks, Nils Christian (Karlsruhe Institute of Technology) | Balyo, Tomas (Karlsruhe Institute of Technology)

AAAI Conferences

The game of Sokoban is an interesting platform for algorithm research. It is hard for humans and computers alike. Even small levels can require a lot of computation for all known algorithms. In this paper we describe how a search-based Sokoban solver can be structured and which algorithms can be used to realize each critical part. We implement a variety of these algorithms, construct a number of different solvers, and combine them into an algorithm portfolio. The resulting solver outperforms existing solvers when run in parallel: with 16 processors, it outperforms the previous sequential solvers.
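Independently of Sokoban specifics, the parallel-portfolio idea can be sketched with Python's concurrent.futures: launch every solver on the same level and return whichever finishes first. The two solver stubs below are hypothetical stand-ins, not the paper's solvers.

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait
import time

def run_portfolio(solvers, level):
    """Run every solver on the same level in parallel and return the
    first solution found; pending solvers are cancelled (already-running
    ones simply finish and their results are discarded)."""
    with ThreadPoolExecutor(max_workers=len(solvers)) as pool:
        futures = [pool.submit(solver, level) for solver in solvers]
        done, not_done = wait(futures, return_when=FIRST_COMPLETED)
        for f in not_done:
            f.cancel()
        return next(iter(done)).result()

# Hypothetical solver stubs; each returns a push sequence for the level.
def fast_solver(level):
    return "RRDD"                 # finds a solution immediately

def slow_solver(level):
    time.sleep(0.5)               # simulates a slower search strategy
    return "LLUU"
```

The portfolio is as fast as its fastest member on each level, which is exactly why diverse constituent solvers pay off: different levels favor different search strategies.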


Algorithm Selection via Ranking

Oentaryo, Richard Jayadi (Singapore Management University) | Handoko, Stephanus Daniel (Singapore Management University) | Lau, Hoong Chuin (Singapore Management University)

AAAI Conferences

The abundance of algorithms developed to solve different problems has given rise to an important research question: How do we choose the best algorithm for a given problem? Known as algorithm selection, this issue has been prevailing in many domains, as no single algorithm can perform best on all problem instances. Traditional algorithm selection and portfolio construction methods typically treat the problem as a classification or regression task. In this paper, we present a new approach that provides a more natural treatment of algorithm selection and portfolio construction as a ranking task. Accordingly, we develop a Ranking-Based Algorithm Selection (RAS) method, which employs a simple polynomial model to capture the ranking of different solvers for different problem instances. We devise an efficient iterative algorithm that can gracefully optimize the polynomial coefficients by minimizing a ranking loss function, which is derived from a sound probabilistic formulation of the ranking problem. Experiments on the SAT 2012 competition dataset show that our approach yields competitive performance to that of more sophisticated algorithm selection methods.
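The key ingredient of such an approach is a ranking loss over solver pairs. Below is a minimal sketch of a probabilistic pairwise loss, a logistic surrogate in the spirit of, but not identical to, the loss derived in the paper; a model's scores incur a penalty whenever a worse-ranked solver is scored above a better-ranked one.

```python
import math

def pairwise_ranking_loss(scores, true_ranking):
    """Average logistic pairwise loss. `true_ranking[i]` is solver i's
    true rank (1 = best); `scores[i]` is the model's score for solver i
    (higher = predicted better). For every ordered pair where i truly
    outranks j, penalise score[j] exceeding score[i]."""
    loss, pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if true_ranking[i] < true_ranking[j]:
                loss += math.log(1.0 + math.exp(scores[j] - scores[i]))
                pairs += 1
    return loss / pairs
```

Minimizing this loss over the coefficients of a polynomial scoring model (by gradient descent, say) recovers the flavor of the RAS training procedure: the model is rewarded for getting the order of solvers right, not for predicting their absolute runtimes.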